T-distributed stochastic neighbor embedding

t-distributed stochastic neighbor embedding (t-SNE) is a machine learning algorithm for dimensionality reduction developed by Laurens van der Maaten and Geoffrey Hinton. It is a nonlinear technique that is particularly well suited for embedding high-dimensional data into a space of two or three dimensions, which can then be visualized in a scatter plot. Specifically, it models each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points.
The t-SNE algorithm comprises two main stages. First, t-SNE constructs a probability distribution over pairs of high-dimensional objects in such a way that similar objects have a high probability of being picked, whilst dissimilar points have an infinitesimal probability of being picked. Second, t-SNE defines a similar probability distribution over the points in the low-dimensional map, and it minimizes the Kullback–Leibler divergence between the two distributions with respect to the locations of the points in the map. Note that whilst the original algorithm uses the Euclidean distance between objects as the base of its similarity metric, this can be replaced by another distance measure that is appropriate for the data.
t-SNE has been used in a wide range of applications, including computer security research, music analysis, cancer research, and bioinformatics.
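For illustration, the two-stage procedure described above can be run with an off-the-shelf implementation. The following is a minimal sketch assuming Python with scikit-learn and matplotlib available; the digits dataset is used only as a stand-in for generic high-dimensional data, and the parameter values are illustrative rather than prescriptive.
<syntaxhighlight lang="python">
# Embed the 64-dimensional digits dataset into two dimensions and plot it.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

X, labels = load_digits(return_X_y=True)        # 1797 samples, 64 features

# perplexity controls the effective neighbourhood size used when the
# input similarities are constructed (first stage); fit_transform then
# optimises the 2-D map (second stage).
embedding = TSNE(n_components=2, perplexity=30.0, random_state=0).fit_transform(X)

plt.scatter(embedding[:, 0], embedding[:, 1], c=labels, s=5)
plt.show()
</syntaxhighlight>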
== Details ==
Given a set of N high-dimensional objects \mathbf{x}_1, \dots, \mathbf{x}_N, t-SNE first computes probabilities p_{ij} that are proportional to the similarity of objects \mathbf{x}_i and \mathbf{x}_j, as follows:
p_{j|i} = \frac{\exp(-\lVert \mathbf{x}_i - \mathbf{x}_j \rVert^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\lVert \mathbf{x}_i - \mathbf{x}_k \rVert^2 / 2\sigma_i^2)},
p_{ij} = \frac{p_{j|i} + p_{i|j}}{2N}
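For illustration, these affinities can be transcribed directly into code. The following is a minimal NumPy sketch; the function name input_similarities is a hypothetical label, the dense N×N distance computation is a naive choice made for readability, and the bandwidths \sigma_i are assumed to be given (their selection is described next).
<syntaxhighlight lang="python">
import numpy as np

def input_similarities(X, sigma):
    """Symmetrised affinities p_ij built from the conditional
    probabilities p_{j|i} defined above; sigma is a length-N array of
    Gaussian bandwidths, one per data point."""
    N = X.shape[0]
    # Squared Euclidean distances ||x_i - x_j||^2 between all pairs.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    P_cond = np.exp(-sq_dists / (2.0 * sigma[:, None] ** 2))
    np.fill_diagonal(P_cond, 0.0)                # p_{i|i} = 0 by convention
    P_cond /= P_cond.sum(axis=1, keepdims=True)  # normalise each row over k != i
    return (P_cond + P_cond.T) / (2.0 * N)       # p_ij = (p_{j|i} + p_{i|j}) / (2N)
</syntaxhighlight>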
The bandwidth of the Gaussian kernels, \sigma_i, is set in such a way that the perplexity of the conditional distribution equals a predefined perplexity, using a binary search. As a result, the bandwidth is adapted to the density of the data: smaller values of \sigma_i are used in denser parts of the data space.
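A sketch of this bandwidth selection is given below; the function name bandwidth_for_perplexity, the search bounds, and the tolerance are illustrative choices (practical implementations often search over the precision 1/(2\sigma_i^2) instead), but the loop follows the same idea: adjust \sigma_i by bisection until 2^{H(P_i)} matches the target perplexity.
<syntaxhighlight lang="python">
import numpy as np

def bandwidth_for_perplexity(sq_dists_i, target_perplexity, tol=1e-5, max_iter=50):
    """Bisection search for sigma_i such that the perplexity 2**H(P_i)
    of the conditional distribution p_{.|i} matches the target;
    sq_dists_i holds the squared distances ||x_i - x_k||^2 for k != i."""
    lo, hi = 0.0, np.inf
    sigma = 1.0
    for _ in range(max_iter):
        p = np.exp(-sq_dists_i / (2.0 * sigma ** 2))
        p /= p.sum()
        entropy = -np.sum(p * np.log2(p + 1e-12))   # Shannon entropy H(P_i) in bits
        if abs(2.0 ** entropy - target_perplexity) < tol:
            break
        if 2.0 ** entropy > target_perplexity:
            # Distribution too flat: shrink the bandwidth.
            hi = sigma
            sigma = (lo + sigma) / 2.0
        else:
            # Distribution too peaked: widen the bandwidth.
            lo = sigma
            sigma = sigma * 2.0 if np.isinf(hi) else (sigma + hi) / 2.0
    return sigma
</syntaxhighlight>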
t-SNE aims to learn a d-dimensional map \mathbf{y}_1, \dots, \mathbf{y}_N (with \mathbf{y}_i \in \mathbb{R}^d) that reflects the similarities p_{ij} as well as possible. To this end, it measures similarities q_{ij} between two points in the map \mathbf{y}_i and \mathbf{y}_j, using a very similar approach. Specifically, q_{ij} is defined as:
q_{ij} = \frac{(1 + \lVert \mathbf{y}_i - \mathbf{y}_j \rVert^2)^{-1}}{\sum_{k \neq l} (1 + \lVert \mathbf{y}_k - \mathbf{y}_l \rVert^2)^{-1}}
Herein a heavy-tailed Student-t distribution is used to measure similarities between low-dimensional points in order to allow dissimilar objects to be modeled far apart in the map.
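As a sketch, the map similarities can be computed as follows (NumPy assumed; the function name output_similarities is illustrative, and the dense pairwise computation is again a naive choice made for readability).
<syntaxhighlight lang="python">
import numpy as np

def output_similarities(Y):
    """Student-t based similarities q_ij between the map points y_i; the
    heavy tails allow dissimilar inputs to be placed far apart in the map
    without incurring a large penalty."""
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    inv = 1.0 / (1.0 + sq_dists)   # (1 + ||y_i - y_j||^2)^(-1)
    np.fill_diagonal(inv, 0.0)     # q_{ii} = 0 by convention
    return inv / inv.sum()         # normalise over all pairs k != l
</syntaxhighlight>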
The locations of the points \mathbf{y}_i in the map are determined by minimizing the (non-symmetric) Kullback–Leibler divergence of the distribution Q from the distribution P, that is:
KL(P \,\|\, Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}
The minimization of the Kullback–Leibler divergence with respect to the points \mathbf_i is performed using gradient descent. The result of this optimization is a map that reflects the similarities between the high-dimensional inputs well.
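A minimal sketch of this optimization is given below, assuming NumPy and using the standard t-SNE gradient \frac{\delta C}{\delta \mathbf{y}_i} = 4 \sum_j (p_{ij} - q_{ij})(\mathbf{y}_i - \mathbf{y}_j)(1 + \lVert \mathbf{y}_i - \mathbf{y}_j \rVert^2)^{-1}. Refinements commonly used in practice, such as momentum and early exaggeration, are omitted for brevity, and the function name and hyperparameter values are illustrative.
<syntaxhighlight lang="python">
import numpy as np

def tsne_gradient_descent(P, d=2, learning_rate=100.0, n_iter=500, seed=0):
    """Plain gradient descent on KL(P || Q) with respect to the map
    points y_i, starting from a small random initialisation."""
    rng = np.random.default_rng(seed)
    Y = 1e-4 * rng.standard_normal((P.shape[0], d))
    for _ in range(n_iter):
        diff = Y[:, None, :] - Y[None, :, :]              # y_i - y_j
        inv = 1.0 / (1.0 + np.sum(diff ** 2, axis=-1))    # (1 + ||y_i - y_j||^2)^(-1)
        np.fill_diagonal(inv, 0.0)
        Q = inv / inv.sum()                               # low-dimensional similarities q_ij
        # Gradient: 4 * sum_j (p_ij - q_ij) (y_i - y_j) (1 + ||y_i - y_j||^2)^(-1)
        grad = 4.0 * np.einsum('ij,ijk->ik', (P - Q) * inv, diff)
        Y -= learning_rate * grad
    return Y
</syntaxhighlight>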

Excerpt source: Wikipedia, the free encyclopedia. Read the full article "T-distributed stochastic neighbor embedding" on Wikipedia.


